
    Quantization of the open string on plane-wave limits of dS_n x S^n and non-commutativity outside branes

    The open string on the plane-wave limit of dS_n × S^n with constant B_2 and dilaton background fields is canonically quantized. This entails solving the classical equations of motion for the string, computing the symplectic form, and defining the canonical commutation relations from its inverse. Canonical quantization proves perfectly suited to this task, since the symplectic form is unambiguously defined and non-singular. The string position and string momentum operators are shown to satisfy equal-time canonical commutation relations. Notably, the string position operators define non-commutative spaces for all values of the string world-sheet parameter σ, thus extending non-commutativity outside the branes on which the string endpoints may be assumed to move. The Minkowski spacetime limit is smooth and reproduces the results in the literature; in particular, non-commutativity becomes confined to the endpoints.
    Comment: 31 pages, 12p
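    The quantization route described (solve the equations of motion, compute the symplectic form, invert it to read off commutators) is the standard canonical procedure; schematically, in generic flat-space notation rather than the paper's own mode expansion:

    ```latex
    % Symplectic two-form on the string phase space (schematic):
    \Omega = \int_0^{\pi} d\sigma \; \delta P_\mu(\tau,\sigma) \wedge \delta X^\mu(\tau,\sigma),
    % whose inverse defines the equal-time canonical commutators
    \qquad
    [\,X^\mu(\tau,\sigma),\, P_\nu(\tau,\sigma')\,] = i\, \delta^\mu_{\ \nu}\, \delta(\sigma-\sigma').
    ```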

    Regularization and Renormalization of Chern-Simons Theory

    We analyze some features of the perturbative quantization of Chern-Simons theory (CST) in the Landau gauge. In this gauge the theory is known to be perturbatively finite. We consider the renormalization scheme in which the renormalized parameter k equals the bare or classical one, and show that it constitutes a natural parametrization for the quantum theory. The reason is that, although in this renormalization scheme the value of the Green functions depends on the regularization used, comparison among different regularization methods shows that the observables (Wilson loops) are the same function of the shifted monodromy parameter k + c_v for all BRS-invariant regulators used so far for CST. We also discuss a particular BRS-invariant regularization prescription in which CST is perturbatively defined as the large-mass limit of dimensionally regularized topologically massive Yang-Mills theory. With this regularization prescription, the radiative corrections induced by two-loop contributions do not entail observable consequences, since they can be reabsorbed by a finite rescaling of the fields alone. This same mechanism is conjectured to take place at higher perturbative orders. Talk presented by G.G. at the NATO ARW on "Low Dimensional Topology and Quantum Field Theory", 6-13 September 1992, Cambridge (UK).
    Comment: 10 pages, Phyzzx, LPTHE 92-4

    Study of a model for the distribution of wealth

    An equation for the evolution of the distribution of wealth in a population of economic agents making binary transactions with a constant total amount of "money" has recently been proposed by one of us (RLR). This equation takes the form of an iterated nonlinear map of the distribution of wealth. The equilibrium distribution is known and takes a rather simple form. If this distribution is such that, at some time, the higher moments of the distribution exist, one can find their law of evolution exactly. A seemingly simple extension of the laws of exchange also yields explicit iteration formulae for the higher moments, but with a major difference from the original iteration: the high-order moments grow indefinitely. This provides a quantitative model in which the spreading of wealth, namely the difference between the rich and the poor, tends to increase with time.
    Comment: 12 pages, 2 figure
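    A binary-exchange model with conserved total wealth is easy to simulate agent by agent. A minimal Monte Carlo sketch follows; the exchange rule used here (a random pair pools its wealth and splits it uniformly at random) is a common stand-in, and the paper's iterated map for the full distribution may use a different rule:

    ```python
    import random

    def simulate(initial, steps, seed=0):
        """Monte Carlo sketch of a binary-exchange wealth model.

        Hypothetical rule: a randomly chosen pair of agents pools its
        wealth and splits it at a uniformly random fraction, so the
        total "money" is conserved at every trade.
        """
        rng = random.Random(seed)
        w = list(initial)
        n = len(w)
        for _ in range(steps):
            i, j = rng.randrange(n), rng.randrange(n)
            if i == j:
                continue
            total = w[i] + w[j]          # conserved within the pair
            eps = rng.random()
            w[i], w[j] = eps * total, (1 - eps) * total
        return w

    wealth = simulate([1.0] * 1000, 100_000)
    print(f"total = {sum(wealth):.6f}, max = {max(wealth):.3f}")
    ```

    Starting from a uniform distribution, the total stays fixed while the spread between rich and poor agents grows, which is the qualitative behaviour the abstract describes.
    
    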

    Does X(3872) count?

    The question of whether or not weakly bound states should be effectively incorporated in a hadronic representation of the QCD partition function is addressed by analyzing the example of the X(3872), a resonance close to the D D̄* threshold which has been suggested as an example of a loosely bound molecule. This can be decided by studying the D D̄* scattering phase shifts in the J^{PC} = 1^{++} channel and their contribution to the level density in the continuum, which also gives information on its abundance in a hot medium. In this work it is shown that, in a purely molecular picture, the bound-state contribution cancels the continuum, resulting in a null occupation number density at finite temperature, which implies the X(3872) does not count below the Quark-Gluon Plasma crossover (T ∼ 150 MeV). However, if a non-zero c c̄ component is present in the X(3872) wave function, this cancellation does not occur for temperatures T ≳ 250 MeV.
    Comment: 4 pages, 2 figures. XVII International Conference on Hadron Spectroscopy and Structur
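    The cancellation mechanism invoked here is the textbook one: in a Beth-Uhlenbeck-type counting, the shift in the continuum level density is set by the energy derivative of the phase shift, and Levinson's theorem ties its integral to the number of bound states. Schematically, for a single channel (not the paper's coupled-channel expressions):

    ```latex
    % Continuum level-density shift from the scattering phase shift:
    \Delta\rho(E) = \frac{1}{\pi}\,\frac{d\delta(E)}{dE},
    % Levinson's theorem, \delta(0) - \delta(\infty) = n_b\,\pi, then gives
    \qquad
    \int_0^{\infty} \Delta\rho(E)\, dE = -\,n_b ,
    % so the continuum depletion can exactly cancel the n_b bound-state terms.
    ```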

    How long has NICE taken to produce Technology Appraisal guidance? A retrospective study to estimate predictors of time to guidance.

    OBJECTIVES: To assess how long the UK's National Institute for Health and Clinical Excellence (NICE) Technology Appraisal Programme has taken to produce guidance, and to determine independent predictors of time to guidance. DESIGN: Retrospective time-to-event (survival) analysis. SETTING: Technology Appraisal guidance produced by NICE. DATA SOURCE: All appraisals referred to NICE by February 2010 were included, except those referred prior to 2001 and a number that were suspended. OUTCOME MEASURE: Duration from the start of an appraisal (when the scope document was released) until publication of guidance. RESULTS: Single Technology Appraisals (STAs) were published significantly faster than Multiple Technology Appraisals (MTAs), with median durations of 48.0 (IQR 44.3-75.4) and 74.0 (IQR 60.9-114.0) weeks, respectively (p < 0.0001). Median time to publication exceeded published process timelines, even after adjusting for appeals. Results from the modelling suggest that STAs published guidance significantly faster than MTAs after adjusting for other covariates (by 36.2 weeks; 95% CI -46.05 to -26.42 weeks), and that appeals against provisional guidance significantly increased the time to publication (by 42.83 weeks; 95% CI 35.50 to 50.17 weeks). There was no evidence that STAs of cancer-related technologies took longer to complete than STAs of other technologies after adjusting for potentially confounding variables, and only weak evidence that the time to produce guidance is increasing each year (by 1.40 weeks; 95% CI -0.35 to 2.94 weeks). CONCLUSIONS: The results from this study suggest that the STA process has produced significantly faster guidance than the MTA process irrespective of the topic, but that these gains are lost if appeals are made against provisional guidance. While NICE processes continue to evolve over time, a trade-off might be that decisions take longer; at present, however, there is no evidence of a significant increase in duration.
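    The headline comparison is a median/IQR summary of durations. A minimal sketch of that calculation, using hypothetical durations (the study's data are not reproduced here):

    ```python
    import statistics

    def median_iqr(weeks):
        """Median and interquartile range of time-to-guidance durations (weeks)."""
        q1, med, q3 = statistics.quantiles(weeks, n=4)
        return med, (q1, q3)

    # Hypothetical appraisal durations in weeks, for illustration only.
    sta_weeks = [44, 46, 48, 50, 75]
    mta_weeks = [60, 70, 74, 90, 114]

    sta_med, sta_iqr = median_iqr(sta_weeks)
    mta_med, mta_iqr = median_iqr(mta_weeks)
    print(f"STA median {sta_med} (IQR {sta_iqr}); MTA median {mta_med} (IQR {mta_iqr})")
    ```

    The study's actual comparison also adjusts for covariates (appeals, topic, year) in a survival model, which a raw median cannot do.
    
    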

    Multiplicative Lidskii's inequalities and optimal perturbations of frames

    In this paper we study two design problems in frame theory. On the one hand, given a fixed finite frame F for H ≅ C^d, we compute those dual frames G of F that are optimal perturbations of the canonical dual frame for F under certain restrictions on the norms of the elements of G. On the other hand, for a fixed finite frame F = {f_j}_{j∈I_n} for H, we compute those invertible operators V such that V*V is a perturbation of the identity and such that the frame V·F = {V f_j}_{j∈I_n}, which is equivalent to F, is optimal among such perturbations of F. In both cases, optimality is measured with respect to submajorization of the eigenvalues of the frame operators. Hence, our optimal designs are minimizers of a family of convex potentials that include the frame potential and the mean squared error. The key tool for these results is a multiplicative analogue of Lidskii's inequality in terms of log-majorization and a characterization of the case of equality.
    Comment: 22 page
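    The frame potential mentioned at the end has a concrete finite-dimensional form: for a frame F = {f_j} with frame operator S = Σ_j f_j f_j^*, the frame potential is Σ_{j,k} |⟨f_j, f_k⟩|² = tr(S²). A minimal real-valued sketch (the paper works with general complex frames and submajorization of eigenvalues, which this does not attempt):

    ```python
    import math

    def frame_operator(frame):
        """S = sum_j f_j f_j^T as a d x d matrix (real vectors for simplicity)."""
        d = len(frame[0])
        return [[sum(f[r] * f[c] for f in frame) for c in range(d)]
                for r in range(d)]

    def frame_potential(frame):
        """FP(F) = sum over j,k of <f_j, f_k>^2, which equals tr(S^2)."""
        return sum(sum(a * b for a, b in zip(fj, fk)) ** 2
                   for fj in frame for fk in frame)

    # Example: the Mercedes-Benz frame, three unit vectors in R^2 at 120 degrees.
    F = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
         for k in range(3)]
    S = frame_operator(F)
    # A tight frame has S = (n/d) * I; here n/d = 3/2, and FP = tr(S^2) = 4.5.
    print([[round(x, 6) for x in row] for row in S])
    print(round(frame_potential(F), 6))
    ```

    Tight frames like this one are exactly the minimizers of the frame potential among unit-norm frames, which is why it appears in the family of convex potentials the paper optimizes.
    
    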

    Sterile neutrino decay and the LSND experiment

    We propose a new explanation of the intriguing LSND evidence for electron antineutrino appearance in terms of heavy (mostly sterile) neutrino decay via a coupling with a light scalar and light (mostly active) neutrinos. We perform a fit to the LSND data, as well as to all relevant null-result experiments, taking into account the distortion of the spectrum due to decay. Requiring a coupling g ~ 10^{-5}, a heavy neutrino mass m_4 ~ 100 keV and a mixing with muon neutrinos |U_{mu 4}|^2 ~ 10^{-2}, we show that this model explains all existing data while evading the constraints that disfavor standard (3+1) neutrino models.
    Comment: 3pp. Talk given at the 9th International Conference on Astroparticle and Underground Physics (TAUP 2005), Zaragoza, Spain, 10-14 Sep 200